Talk:IBM Watson/Archive 1
This is an archive of past discussions about IBM Watson. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2
12/14/10 NYT article
I don't feel I am totally neutral on the topic of this article, and I'm not comfortable making anything but minor edits to it, so I wanted to list some information from the most recent New York Times article (already cited in the Watson article) that I think might be worth mention. If someone else would please evaluate these and add the ones they think worth mentioning to the article, that'd be great:
- Watson's opponents will be Ken Jennings and Brad Rutter (currently contained in the cite note for the article; probably worth moving to main article body)
- Prize money = $1 million USD, with half to go to charity if a human wins and all to go to charity if Watson wins
- "'Jeopardy' producers said the computer qualified for the show by passing the same test that human contestants must pass."
- "I.B.M. will share some of the highlights of those games on its Web site in the coming weeks." <---perhaps add an EL to the place where the highlights are/will be posted?
keɪɑtɪk flʌfi (talk) 02:48, 15 December 2010 (UTC)
Categories
It looks like Category:Jeopardy! contestants and Category:Contestants on American game shows would now apply for this article, but should they be added now or wait for the episodes to air? Radagast (talk) 13:52, 15 December 2010 (UTC)
- Oh man. It wasn't just a mock thing? They're actually going to air the episodes? Wow. RayTalk 22:05, 15 December 2010 (UTC)
Separate article?
The article title includes the disambiguator (artificial intelligence software), but Watson now seems to refer to a specific computer system. Perhaps the "Jeopardy" machine needs its own page? Or tweak the article title 134.131.125.49 (talk) 16:37, 7 February 2011 (UTC)
Potential sources
Some high-quality articles that can probably provide useful information for this article. These may or may not already be in the article; I haven't checked closely:
- IBM wins a spot on Jeopardy! (Bloomberg)
- I.B.M. Supercomputer ‘Watson’ to Challenge ‘Jeopardy’ Stars (NYT)
- Taking IBM's supercomputer to Final 'Jeopardy' (Q&A) (CNET interview with Watson manager)
- IBM takes on Jeopardy - has AI really got this far?
And some IBM/Jeopardy-run sites with information on Watson:
- IBM's official site for background information on Watson and the team that built it
- "Watson's" official twitter stream
- Jeopardy's press release (and auto-play video) about the competition
keɪɑtɪk flʌfi (talk) 23:13, 19 December 2010 (UTC)
Source of IBM info
Does anyone have the actual source for the (presumed) IBM quote in the Technology section? I added a ref to a page at the UMBC Comp Sci and EE Dept I found through Google, but now I'm wondering if that wasn't just copied from this article. (!) I can't find the complete quote anywhere "official". - dcljr (talk) 23:27, 10 February 2011 (UTC)
- I don't usually add to Wiki articles, but if anyone wants to add/update the specs from here: http://www.ibmsystemsmag.com/ibmi/Watson_specs/35977p1.aspx —Preceding unsigned comment added by 72.70.39.66 (talk) 17:38, 12 February 2011 (UTC)
- One thing missing from that list of information, revealed on the NOVA special, is that Watson is given the correct responses as they are revealed, so that it can better hunt for patterns in categories, such as when a category calls for every correct response to be the name of a month. Robert K S (talk) 18:16, 12 February 2011 (UTC)
Removal Of Most of Significance
Removed a majority of this section. It was a play-by-play of Watson's mistakes during today's match. It did not seem relevant to "significance". It also contained errors. In bringing up the mistake Watson made where he repeated another contestant's answer, the author claimed this was a programmer oversight. The truth is, as stated on the program, Watson does not process spoken clues. He does not "hear" his opponents' answers. The tone of the section also did not match the rest of the article. —Preceding unsigned comment added by 130.215.71.219 (talk) 02:09, 15 February 2011 (UTC)
Request for removal of Bias
The edits with the following for a diff: "01:43, 15 February 2011 69.255.141.194" should be removed for bias. They are factually misleading, and full of bias. To represent Watson as merely a "Jeopardy playing computer" is a gross understatement. Unfortunately, I'm not terribly comfortable just all-out removing the data myself. If someone with more authority could verify my instinct, that would be great. —Preceding unsigned comment added by 174.45.198.187 (talk)
I agree and added the POV tag to the significance section. Calling it "just a jeopardy playing computer" is crazy. I will rewrite that section if/when I have time, but it won't be for a while so someone else might want to do it. 71.245.122.254 (talk) —Preceding undated comment added 01:55, 15 February 2011 (UTC).
Haha, in the time it took me to reload the page the whole offending section titled "Significance" had been removed. All I can say is good riddance. An actual well-thought-out analysis of its significance and importance would be nothing short of a great addition to this article; the previous one was horrible. 166.137.12.59 (talk) 03:16, 15 February 2011 (UTC)
Where are the log files of Watson?
There was a similar case about IBM Deep Blue: "After the loss, Kasparov said that he sometimes saw deep intelligence and creativity in the machine's moves, suggesting that during the second game, human chess players had intervened on behalf of the machine, which would be a violation of the rules. IBM denied that it cheated, saying the only human intervention occurred between games. The rules provided for the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play that were revealed during the course of the match. This allowed the computer to avoid a trap in the final game that it had fallen for twice before. Kasparov requested printouts of the machine's log files but IBM refused, although the company later published the logs on the Internet." And in fact here you can find them: [1]
From the first game: "It is not connected to Internet, so it can not look up online for help." What I want is no more than in the Deep Blue case: provide the log file to prove the above statement. Robert Gerbicz (talk) 19:52, 15 February 2011 (UTC)
- Does this comment have anything to do with the article, or are you just using the talk page as a forum to request something from IBM? A fluffernutter is a sandwich! (talk) 19:56, 15 February 2011 (UTC)
- We can open an aftermath part of the article after the match. Like in the Deep Blue wikipedia article. There the question of the log files would be an important issue. Robert Gerbicz (talk) 20:02, 15 February 2011 (UTC)
- On Wikipedia, it is only an important issue if it is notable, which means something that reliable sources have published about. We don't post speculation from random internet users, and we have a strong rule against original research. While it may seem very important to you, and it may actually be something significant in the grand scheme of things, it is only worthy of mention in the article if it can be sourced to published content, under our verifiability and sourcing guidelines. -Andrew c [talk] 20:55, 15 February 2011 (UTC)
- I have to agree here. One of the big differences here is that so far neither Jennings nor Rutter has claimed that there was cheating via human interference, unlike Kasparov.--76.66.180.54 (talk) 00:31, 17 February 2011 (UTC)
NSA/FBI wiretap machine
Why no mention of government sales? Hcobb (talk) 13:55, 16 February 2011 (UTC)
- What government sales? Are there any? Described in a reliable source? A fluffernutter is a sandwich! (talk) 22:32, 18 February 2011 (UTC)
Input
How is the input handled? Does Watson get the clue after the last word of it is spoken? Does he get the clue at the same time that the buzzer light goes on? Paul Studier (talk) 02:11, 16 February 2011 (UTC)
- The article does not make it at all clear how Watson receives its input. Is it via sound? Or text already prepared and delivered electronically? Or does it need to visually process written text? Could anyone add details to the article? -84user (talk) 19:39, 17 February 2011 (UTC)
- On one of the first two of the three episodes Alex Trebek said that Watson doesn't hear anything. Alex said that Watson receives the text of the question. Any more info on it, I have no idea. --luckymustard (talk) 20:19, 17 February 2011 (UTC)
- Yes this is correct, Watson is "deaf" and "blind." It receives the clues in electronic text format at the same time the clue is revealed visually to the contestants. Let me see if I can dig up a ref for that... A fluffernutter is a sandwich! (talk) 20:24, 17 February 2011 (UTC)
- Here we go. It's mentioned in any number of articles, but this is the one I pulled up first: "Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set." (http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html) A fluffernutter is a sandwich! (talk) 20:28, 17 February 2011 (UTC)
- Thanks, I now see that at the time of my question the nytimes cite was already sourcing "received the clues electronically", which has since been improved to "received the clues as electronic texts". For some reason I was trying to follow the information on ibm's website and getting nowhere. -84user (talk) 02:23, 19 February 2011 (UTC)
Percent right?
I don't see a mention of what percentage of Watson's answers were right in either round. The first part of the article gives a clear sense that Watson's main intellectual skill, like most Jeopardy contestants, was in pressing the button really, really fast. But I'd like to know if he matched the 60% correct rating that some online sources suggest is typical of human contestants. Wnt (talk) 06:45, 19 February 2011 (UTC)
Natural Language Algorithms
What algorithms are used to understand the natural language? I can find nothing in the literature. The best clue is “more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses” [Watson – A System Designed for Answers], which is good only for commercialization. Any relevant indication will be welcome.--Connection (talk) 10:42, 19 February 2011 (UTC)
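For what it's worth, the "generate hypotheses, score evidence, merge and rank" pipeline in that IBM quote can at least be illustrated with a toy sketch. Everything below (the stubbed candidate list, both scorers, the equal weighting) is invented for illustration and is not IBM's actual code or any of its 100-plus techniques:

```python
# Toy sketch of a DeepQA-style pipeline: generate candidate answers,
# score each with several independent evidence scorers, then merge the
# scores into one confidence and rank the candidates. All names,
# candidates, and weights here are hypothetical placeholders.

def generate_hypotheses(clue):
    # In the real system this stage searches many sources; stubbed here
    # with the candidates from the "kalafjor" clue discussed on this page.
    return ["cauliflower", "cabbage", "vegetable"]

def score_keyword_overlap(clue, candidate):
    # Toy scorer: fraction of the candidate's characters that appear in the clue.
    clue_chars = set(clue.lower())
    return sum(c in clue_chars for c in candidate) / len(candidate)

def score_type_match(clue, candidate):
    # Toy scorer: reward candidates that look like the expected answer type.
    return 1.0 if candidate.endswith("flower") else 0.5

SCORERS = [score_keyword_overlap, score_type_match]

def answer(clue):
    # Merge: average the scorers; rank: best merged confidence first.
    return sorted(
        generate_hypotheses(clue),
        key=lambda cand: sum(s(clue, cand) for s in SCORERS) / len(SCORERS),
        reverse=True,
    )
```

The real merging step reportedly uses learned, per-question-type weights rather than a plain average, which this sketch deliberately ignores.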
Conflicting information
There's lots of conflicting info in this article, concerning the amount and usage of RAM and hard disk memory for starters. Somebody should go through it and correct this. —Preceding unsigned comment added by 91.177.4.121 (talk) 15:19, 19 February 2011 (UTC)
Randomly chosen clues
The Preparation section claims, "To counter IBM's claim of bias, the Jeopardy! staff generated their clues by allowing a third party to randomly pick 30 clues from 100 already played games." This doesn't make sense. The Challenge consisted of 2 full games, which would mean 120 clues (plus two "Final Jeopardy!" clues). Plus, previously used clues couldn't be used since Watson had already "seen" all the clues used in all previous Jeopardy! games as part of his development. If this is not referring to the Challenge games, then which ones? The practice games against other former players? There were hundreds of clues used in those. The "practice match" with BR and KJ? That was only 15 questions. I don't understand... - dcljr (talk) 21:05, 21 February 2011 (UTC)
- Good catch, Dcljr. I think that sentence was conflating a bunch of things, none of which were supported by the source tacked onto the sentence. The practice games were selected (not randomly) from previously-played games; the actual games for the exhibition were selected randomly from the games the writers produced for the entire season. I've removed the sentence and added a source for some other things; going to go on a source hunt now to see if I can suss out cites for the two true facts it could have meant. A fluffernutter is a sandwich! (talk) 21:35, 21 February 2011 (UTC)
- But the source does support the fact that the clues were randomly generated by a third party (Baker's words, 12:11 to 12:27). Given that IBM's accusation that the show was biased threatened to undermine the reputation of Jeopardy! (Baker's words, 11:42 to 12:11), Jeopardy!'s reply needs to be pointed out. I suggest rewording the sentence to more accurately reflect Baker's words and state "To counter IBM's claim of bias, the Jeopardy! staff generated their clues by allowing a third party to randomly pick clues from previously written shows." Jim101 (talk) 00:34, 22 February 2011 (UTC)
- I probably missed it, in that case - noscript blocked the podcast content and I thought I was reading a print article when I checked the source. However, being completely unfamiliar with podcasts and mostly with cnet, I'm still not seeing something on that page called "Baker's word" - can you make your directions to it any more idiotproof for me? The big video in the middle of the page I'm seeing is only 5 minutes long, so I'm assuming that's not what you're referring to... A fluffernutter is a sandwich! (talk) 00:59, 22 February 2011 (UTC)
- Download link...full length podcast should be 30 minutes long. Jim101 (talk) 01:04, 22 February 2011 (UTC)
- Huh, you're right, there he is saying it. I privately think he might have gotten his facts wrong, but he's indisputably a reliable source on the topic, so fair enough. How about something like "...the Jeopardy! staff had a third party select thirty games at random from 100 previously written ones for Watson to play in" or something along those lines? A fluffernutter is a sandwich! (talk) 01:10, 22 February 2011 (UTC)
Minor point about clue length
In the Operation section, the article says, "human participants were able to use the six to eight seconds it takes to read the clue to decide whether to signal for answering." It seems to me that 6 seconds is actually a bit on the long side for the time it takes Alex to read a typical Jeopardy! clue (I would say 2 to 7 seconds is a more representative range of values). Neither source cited in this paragraph seems to say anything about the matter, so where did the "six to eight seconds" come from? - dcljr (talk) 20:45, 21 February 2011 (UTC)
- Yet the truth is, in more than 20 games I witnessed between Watson and former “Jeopardy!” players, humans frequently beat Watson to the buzzer. Their advantage lay in the way the game is set up. On “Jeopardy!” when a new clue is given, it pops up on screen visible to all. (Watson gets the text electronically at the same moment.) But contestants are not allowed to hit the buzzer until the host is finished reading the question aloud; on average, it takes the host about six or seven seconds to read the clue.
- Players use this precious interval to figure out whether or not they have enough confidence in their answers to hazard hitting the buzzer. After all, buzzing carries a risk: someone who wins the buzz on a $1,000 question but answers it incorrectly loses $1,000.
- Often those six or seven seconds weren’t enough time for Watson. The humans reacted more quickly. For example, in one game an $800 clue was “In Poland, pick up some kalafjor if you crave this broccoli relative.” A human contestant jumped on the buzzer as soon as he could. Watson, meanwhile, was still processing. Its top five answers hadn’t appeared on the screen yet. When these finally came up, I could see why it took so long. Something about the question had confused the computer, and its answers came with mere slivers of confidence. The top two were “vegetable” and “cabbage”; the correct answer — “cauliflower” — was the third guess.
- Ah. I didn't notice the links to the other 7 pages of the article... - dcljr (talk) 20:43, 22 February 2011 (UTC)
Cost
I think I heard a figure of $100 million for the overall cost to IBM on Nova, something on that should be in, unless I missed it. 72.228.177.92 (talk) 23:33, 20 February 2011 (UTC)
- As far as I've seen, they're being very coy about the cost. I've seen references to X number of man-hours or man-years, and some handwaving about "and just think about what that must have cost," but that's the closest I remember reading. A fluffernutter is a sandwich! (talk) 04:07, 21 February 2011 (UTC)
- That measure would do and sounds like you have a source. Actually could be better than a dollar figure. 72.228.177.92 (talk) 00:37, 23 February 2011 (UTC)
- Unfortunately, now that I'm trying to pin down where I got that from, I think it came from IBMers speaking on the topic at a viewing party for the matches - not necessarily reliable, and almost certainly not available in any form I could cite. 20:48, 23 February 2011 (UTC)
DeeperQA?
- As a cook I often need to find recipes and their closely related variations in the same way a lawyer or doctor may need to find cases and their variations.
- In the past I resolved this need by asking the computer questions by means of submitting keywords. Today the method I use is quite different.
- Today I find recipes and their variations by query - not by my query of the computer but by the computer's query of me.
- What makes this better is the time I save repeating keyword submissions to refine responses and find the answer I need.
- What makes this possible is the computer's ability to minimize the number of questions it must ask.
- The technique is described here.
DeepQA
I suggest that the Future uses section be split off to DeepQA, since that section is not about future uses of Watson, but about future uses of DeepQA software, of which Watson is a single instance. It is not repurposing Watson that is the future use, but implementing other DeepQA systems. DeepQA (edit | talk | history | protect | delete | links | watch | logs | views) was redirected here a few days ago, after having existed as a stub. 65.93.15.125 (talk) 09:22, 24 February 2011 (UTC)
- I just don't feel like IBM's DeepQA project has received coverage in multiple reliable sources apart from those which discuss Watson. Such an article would therefore be ineligible under WP:N. Also, the small amount of content in this section at this time does not motivate a child article being split off. Robert K S (talk) 15:54, 26 February 2011 (UTC)
Hardware
There seems to be a lot of coverage of the hardware used in Watson, but it's not very relevant and borders on promotion of IBM's products (WP:NOTPROMOTION). The operating system and programming languages used (linux, C++, Java) are portable and can run on other high-end platforms. Compare for example with Google (search engine) which does not mention the hardware platform at all. pgr94 (talk) 10:04, 25 February 2011 (UTC)
- Of course, if there is a crucial feature in IBM's hardware that allows Watson to work then that should be made explicit. pgr94 (talk) 10:06, 25 February 2011 (UTC)
- Watson's main innovation was not the creation of new algorithms for this operation but rather its ability to quickly execute thousands of proven language analysis algorithms simultaneously to find the correct answer. This is why hardware matters. Jim101 (talk) 16:00, 25 February 2011 (UTC)
- So there's a parallel algorithm running multiple language analysis systems and choosing between the results. That's software. It is a common misconception that hardware is important. But if the code is portable it'll run on many other modern supercomputers (e.g. Top 500 Supercomputers). Perhaps Watson's hardware is notable in some way? Is it the most powerful supercomputer? The most RAM? Specialised processors? pgr94 (talk) 20:51, 25 February 2011 (UTC)
- It is not our job to decide what is interesting/important and what is not. The point is that a lot of reliable sources said the software cannot be implemented without using the current hardware IBM built; otherwise IBM would have done this in 2006. Sources state "If the firm focused its computer firepower — including its new “BlueGene” servers — on the challenge, Ferrucci could conduct experiments dozens of times faster than anyone had before, allowing him to feed more information into Watson and test new algorithms more quickly...One important thing that makes Watson so different is its enormous speed and memory." You can keep on arguing that Watson is independent of the hardware configuration and can fit in other computers, but so far I don't see any other sources that support your POV. Jim101 (talk) 22:25, 25 February 2011 (UTC)
- A description of the hardware upon which a famous computer system runs does not "border on promotion". It is important factual information proper to an encyclopedia and worth safeguarding as part of the historical record. Imagine if Wikipedia were operating in 1960 and we discarded technical detail about the computer systems then in existence solely for the reason that they were then in existence. Or, if that is too distant, imagine it is 1997 and we excluded from the encyclopedia technical information about Deep Blue's hardware. We would be poorer today for it. Robert K S (talk) 16:09, 25 February 2011 (UTC)
- Please see software portability. It's the software that makes Watson interesting and notable, not the hardware. Based on the description of Watson it is possible to swap the hardware to some other supercomputer without any significant difference. Deep Blue's hardware was notable because it was specially designed for the task: "480 special purpose VLSI chess chips." pgr94 (talk) 20:51, 25 February 2011 (UTC)
- Watson is not "portable" software; there is no other hardware system in existence that can run Watson besides the one assembled and configured to run Watson for Jeopardy! That system happened to comprise commercially available computers, but that in itself is a notable fact. Robert K S (talk) 22:43, 25 February 2011 (UTC)
- According to the article description it's based on programming languages that port to other processors. Do you have some additional information? Which part do you think won't work on another processor type? pgr94 (talk) 23:03, 25 February 2011 (UTC)
- Nothing to do with processor type; Watson is built in Java. Everything to do with the fact that Watson needs extraordinary parallelism in order to function, and it doubtless employs a significant amount of code in order to harness and manage that parallelism. Robert K S (talk) 23:49, 25 February 2011 (UTC)
- The "significant amount of code in order to harness and manage that parallelism" is called Hadoop and it is portable software. All the evidence points to Watson's software being portable code. I am not against the article including a description of the hardware, but it shouldn't be given more weight than the software and databases. pgr94 (talk) 22:43, 3 March 2011 (UTC)
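To illustrate the portability point being made here: the parallel fan-out/fan-in pattern itself is plain, portable software. A minimal Python sketch, with two toy analyses standing in for Watson's hundreds (nothing below is IBM's code, and the real stack is obviously far more elaborate):

```python
# Hypothetical illustration: run several independent analysis functions
# on the same clue concurrently, then collect their results in order.
from concurrent.futures import ThreadPoolExecutor

def run_analyses(clue, analyses):
    # Fan out: submit each analysis; fan in: gather results in submission order.
    with ThreadPoolExecutor(max_workers=len(analyses)) as pool:
        futures = [pool.submit(fn, clue) for fn in analyses]
        return [f.result() for f in futures]

# Two toy "analysis algorithms" as placeholders.
word_count = lambda clue: len(clue.split())
char_count = lambda clue: len(clue)
```

For example, `run_analyses("this broccoli relative", [word_count, char_count])` fans both analyses out over a thread pool and returns their results as a list. Whether doing this at Watson's scale and latency requires IBM's specific hardware is exactly the question this thread is debating.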
As an AI researcher, I just see people continually obsess over hardware when in a few years it'll run on a PDA (cf Pocket Fritz). At the same time, the truly notable parts (in Watson's case the parallel algorithms and the knowledgebases) are largely ignored. IBM probably doesn't want to reveal information about its algorithms and instead talks about its product line. So all this processor-talk is really advertising and we need to be careful of falling foul of WP:UNDUE. I should add that I have no connection to any hardware manufacturer or vendor. pgr94 (talk) 23:03, 25 February 2011 (UTC)
- Well, if another book/paper were published that gave a detailed description of Watson's algorithms and stated that Watson can be installed on another computer and achieve a 6-8 second reaction time, and we purposely ignored it, then it would be WP:UNDUE. As of now, Wikipedia is not a place for speculation or a crystal ball. Jim101 (talk) 23:14, 25 February 2011 (UTC)
- See Figure 6 of this article [2]. Can we use that image? pgr94 (talk) 22:47, 3 March 2011 (UTC)
From the article: Watson is made up of a cluster of ninety IBM Power 750 servers (plus additional I/O, network and cluster controller nodes in 10 racks) with a total of 2880 POWER7 processor cores and 16 Terabytes of RAM. Each Power 750 server uses a 3.5 GHz POWER7 eight core processor, with four threads per core. This informs the reader how powerful a computer has to be to run Watson. IMHO, this is much more tangible than just saying how many instructions per second it does. Paul Studier (talk) 23:35, 25 February 2011 (UTC)
- According to Tony Pearson, the IBM consultant this article references in the next paragraph, that is actually "how powerful a computer has to be to run Watson to answer questions within Jeopardy! time constraints". Pearson says that the software has, and can, run on a single Power 750 server, but it takes hours to form an answer. (Scaling the hardware based on time and cost budgets would be reasonable for a variety of uses.) This comes back to basic questions about how Wikipedia describes Watson (a quiz show contestant, an IBM product, research/software) and how Wikipedia uses primary sources. —Mrwojo (talk) 01:05, 26 February 2011 (UTC)
- The source also said "Thus the 90 that make up Watson would cost about $3 million." IMO that means 90 Power 750s is equal to the entire system called Watson the Jeopardy! player. One server running Watson's program is something else entirely. Jim101 (talk) 01:37, 26 February 2011 (UTC)
I guess we need to answer a few questions first:
- Would somebody with no experience in PowerPC processors understand, from the description "Watson is made up of ... with a total of 2880 POWER7 processor cores", how powerful Watson is in terms of computation power? I think a more useful description is "Watson has 2880 POWER7 processor cores. It can perform NNN integer operations per second, which is equivalent to the computation power of XXX Intel YYY processors or 1/X of the YYY super-computer".
- Is Watson significant because it can play a TV game show very well or because it shows us the advances in Artificial Intelligence? (Make sure you skim the article and know what fields of research AI covers before you comment).
- In the latter case, I think the hardware configuration is not that relevant, and including trade marks of the machines/processors does seem to be an ad.
- As long as the AI algorithm is computable, we can implement it on any other Turing complete computer, so binary compatibility or portability is not really an issue.
- However, if people believe "a computer that can Jeopardy!" is the more important message here, the hardware configuration does show the minimum amount of MIPS and RAM that is required and in this case is more important.
- Alternatively, consider which is the lesson we should learn from Watson: i) in a near future (consider Moore's Law), we will have a piece of software that runs on your desktop computer that can understand some natural language questions and find the answers; OR ii) IBM can build a powerful computer system from their commercially available servers and processors.
- In other words, I think for the hardware part, we only need to state the fact that we can now assemble enough CPU power and provide enough storage and bandwidth to make the software run with reasonable performance. IBM did not do this alone. For example, IBM probably used off-the-shelf RAM modules.
The three points above --Bill C (talk) 16:22, 27 February 2011 (UTC)
- Not sure Tony Pearson's blog qualifies as a reliable source: "as a supercomputer, the 80 TeraFLOPs of IBM Watson would place it only in 94th place on the Top 500 Supercomputers". pgr94 (talk) 17:15, 3 March 2011 (UTC)
Was Watson electronically notified when to buzz?
The following statement was removed Feb 19 by 66.108.143.26 without adequate explanation:
- "Also, Watson could avoid the time-penalty for accidentally signalling too early, because it was electronically notified when to buzz, whereas the human contestants had to anticipate the right moment."
I restored it on Feb 26, 2011 because such a statement is critically important in comparing Watson's performance with that of a human. Roesser (talk) 03:52, 27 February 2011 (UTC)
- I don't think I was the person who removed it originally, but I'm off to re-remove it now, because that is a pretty-close-to-false statement. It implies that Watson was notified of the buzzer opening while the humans weren't; in fact, both parties were notified - Watson electronically, humans visually, with a light. Neither party had to "anticipate" or guess. A fluffernutter is a sandwich! (talk) 13:30, 27 February 2011 (UTC)
- How can you say the statement is pretty-close-to-false when you admit that it is true - Watson was notified electronically? That made all of the critical difference, since Watson could respond in microseconds to an electronic signal, but the humans required tenths of seconds to visually perceive the light before they could activate their neural-muscular system. This is a very important issue (at least to me) since it lies at the heart of the whole article. If the issue is ignored, a disservice is committed to the readers. BTW, you are not the person who removed it originally, unless you are secretly 66.108.143.26. Roesser (talk) 16:28, 27 February 2011 (UTC)
- The statement should stand, with some modification, IMO. What Watson could or could not do, or what the humans could or could not do, is immaterial, as compared to what Watson did do: Watson always buzzed in as soon as the trigger signal was sent electronically if its response confidence was above the threshold calculated for that game state. Watson never "anticipated", as Watson was not programmed to do so. (This, of course, meant that the humans were forced to anticipate in order to ever get in ahead of Watson, and were not successful at doing so.) Robert K S (talk) 16:55, 27 February 2011 (UTC)
- When the statement was removed by 66.108.143.26, three supporting references were also removed, which are:[1][2][3][4] These references should also be restored along with the statement so future editors don't come along and claim that the statement is unsupported. Roesser (talk) 17:53, 27 February 2011 (UTC)
(edit conflict) :Roesser, I was saying that the part about humans "having to" anticipate was false, not that the part about Watson receiving notification electronically was. I actually thought the fact that Watson was notified electronically was already in the article, but I just did a quick skim and didn't see it. I would agree with that being an important fact that ought to be there; I just disagree with framing it as "watson got a notification while the humans didn't," because in fact they both did. The fact that Watson's reaction time was generally faster than the humans is already in the article elsewhere. A fluffernutter is a sandwich! (talk) 18:22, 27 February 2011 (UTC)
It looks like we might be converging. Suppose we rewrite the statement to something like this:
- "There is concern that Watson had a time advantage because it was electronically notified when to buzz, whereas the humans were notified by means of an illuminated indicator, which requires several tenths of a second to visually perceive that it is lit. To compensate, the humans tried to anticipate the light and often failed."
I believe this addresses your concerns. BTW, your last post was marked with an edit conflict. Do you know why? Roesser (talk) 20:52, 27 February 2011 (UTC)
- The "There is a concern that" part is WP:WEASEL, and "time advantage" is not right; Watson had the same amount of time as the contestants (if anything Watson was time-disadvantaged because it took longer for Watson to converge on the correct response than contestants--witness Watson's performance on clues that took Alex less than 3 seconds to read). Watson's advantage isn't in time or even timing but in reliability of timing of its response. It could reliably ring in an instant after receiving the signal--the standard deviation of the time between the signal and buzz was much lower for Watson than for human players. Also "often failed" isn't sufficiently descriptive of the problem. Robert K S (talk) 23:45, 27 February 2011 (UTC)
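The variance argument above can be illustrated with a toy Monte Carlo sketch. Every number below is an illustrative assumption, not a measured value from the broadcast; the point is only that a low-variance responder reliably beats a high-variance anticipator even when the anticipator is centered exactly on the signal:

```python
import random

random.seed(0)

# Illustrative assumptions, not measured values:
WATSON_MEAN_S, WATSON_SD_S = 0.008, 0.001  # near-instant, very low variance
HUMAN_MEAN_S, HUMAN_SD_S = 0.0, 0.080      # anticipation centered on the
                                           # signal, but with high variance

def race():
    """Return who buzzes in first on one clue."""
    watson = random.gauss(WATSON_MEAN_S, WATSON_SD_S)
    human = random.gauss(HUMAN_MEAN_S, HUMAN_SD_S)
    # A human who buzzes before the signal (negative time) is locked out;
    # model that as losing the race.
    if human < 0:
        return "watson"
    return "human" if human < watson else "watson"

trials = 100_000
wins = sum(race() == "watson" for _ in range(trials))
print(f"Watson wins {100 * wins / trials:.0f}% of races in this toy model")
```

In this sketch the human wins only when landing in the narrow window between the signal and Watson's response, so Watson takes the large majority of races despite the human's perfectly centered anticipation.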
Hmmm, now we seem to be diverging. Ok suppose we try something like this:
- "There is a critical difference between the way that the humans were notified to buzz and that for Watson. The humans were notified by an illuminated indicator, requiring tenths of a second to visually perceive; whereas Watson was notified by an electronic signal, requiring no perception time. Watson, therefore, could respond within microseconds of the signal and did so if sufficiently confident of the answer by that time. The humans tried to compensate for the perception delay by anticipating the light, but the variation in the anticipation time was generally too great to fall within Watson’s response time." Roesser (talk) 01:31, 28 February 2011 (UTC)
- I still think you're synthesizing a bit, Roesser. We would need sources stating that tenths of a second were what was required, and that there was no perception time to process the signal required by Watson, that the humans specifically tried to compensate for Watson by buzzing differently, and that the variation was "too great." Without such sources, the most we can say is a fact-based thing like, "While the humans were notified of the buzzer opening visually, with a set of lights, Watson was notified by an electronic signal and did no visual processing." A fluffernutter is a sandwich! (talk) 02:34, 28 February 2011 (UTC)
- We can say more than that, fluffer. We know that Watson never tried to anticipate (was not programmed to), and that the humans did. (IBM and both the humans have all talked about this.) Robert K S (talk) 02:56, 28 February 2011 (UTC)
- Yes, those are fair game too, Robert. A fluffernutter is a sandwich! (talk) 02:57, 28 February 2011 (UTC)
Fluffer, all parts of my last proposed statement, including perception time, anticipation, and variation, are supported by the three references I included before and a fourth I am now adding concerning reaction time. Robert, I agree with your last post and it is consistent with the proposed statement. Roesser (talk) 03:30, 28 February 2011 (UTC)
- Ah, if those parts are all clearly supported by the sources, then I have no objection. I was going by my memory of those facts not having been mentioned in stuff I read, rather than checking the sources you're proposing. Guess my memory is fallible, who knew?! A fluffernutter is a sandwich! (talk) 20:15, 28 February 2011 (UTC) ETA after reading source names: Just one point, not sure if you're aware - wikipedia cannot be used as a source for itself, so we can't use a Wikipedia article to source anything regarding timing, etc. We can of course, however, use a source that's also used in another Wikipedia article, if one is available. A fluffernutter is a sandwich! (talk) 20:17, 28 February 2011 (UTC)
Fluffer, thanks for the tip about Wikipedia not being used as a self-reference. I suggest the 4th reference be replaced with [5], which is one of its references. Also, I suggest Robert K's last post be folded into the statement so that it would then read:
- "There is a critical difference between the way that the humans were notified to buzz and that for Watson. The humans were notified by an illuminated indicator, requiring tenths of a second to visually perceive; whereas Watson was notified by an electronic signal, requiring no perception time. Watson, therefore, could respond within microseconds of the signal and did so if sufficiently confident of the answer by that time. The humans tried to compensate for the perception delay by anticipating the light, but the variation in the anticipation time was generally too great to fall within Watson’s response time. Watson never did anticipate the notification signal."
I'll wait a bit and then post this statement with the references. Roesser (talk) 20:58, 28 February 2011 (UTC)
- The statement is now posted. Roesser (talk) 03:09, 1 March 2011 (UTC)
References
- ^ "Jeopardy! Champ Ken Jennings". The Washington Post. February 15, 2011. Retrieved 2011-02-15.
- ^ "IBM Computer Faces Off Against 'Jeopardy' Champs". Talk of the Nation. National Public Radio. February 11, 2011. Retrieved 2011-02-15.
- ^ Alex Strachan (February 12, 2011). "For Jennings, it's a man vs. man competition". The Vancouver Sun. Retrieved 2011-02-15.
- ^ Mental chronometry Wikipedia article explaining and quantifying components of human reaction time
- ^ Kosinski, R. J. (2008). A literature review on reaction time, Clemson University.
I was the one that removed the original statement, because it is purely WP:SYN, and I see the statement got reinserted without even trying to solve the problem. This is the current version of the statement:
- The Jeopardy! staff used different means to notify Watson and the human players when to buzz, which proved to be critical.
Where is the source for this sentence?
- The humans were notified by a light, which took them tenths of a second to perceive (see Mental chronometry).[26] Watson was notified by an electronic signal, which required no perception time.
This is where most of the problem arises. The first part of the sentence is supported only by the fact that human reaction to flashes (not during the games) is tenths of a second. The latter part is supported by a popular myth that electric signals travel instantly, ignoring the fact that the algorithms and the length of wire can delay electronic signals by seconds in circuit construction, and that we don't know the detailed construction of Watson. Furthermore, there is currently no RS that compared the reaction times by measuring seconds, yet somehow Wikipedia attempted to do this. A big case of WP:SYN and WP:OR that got this statement removed in the first place.
- Therefore, Watson could respond within microseconds of the signal, and did so if sufficiently confident of the answer by that time.
This statement is a conclusion based on the above sentence. It is original research to combine idea A with B to conclude C, and this is made worse by the fact that idea B is also based on original research.
- The humans tried to compensate for the perception delay by anticipating the light, but the variation in the anticipation time was generally too great to fall within Watson's response time.[25][27][28]
This is the only sentence that is properly sourced. Actually, this is a bit shaky too. Source 25 is Jennings's own opinion, and since he is a participant in the game, his observation only counts as opinion. Source 27 only said Watson is quick on the draw, but nothing about perception delay or the variation in the anticipation time. Source 28 is the closest to supporting the statement, because Mr. Baker said "Yes, but the computer works differently than people. People anticipate the end of Alex Trebek's sentence, at which point they can buzz. And Watson doesn't have that anticipation, but as soon as the light goes on, Watson, if it knows the answer and has confidence in it, buzzes almost instantly." But still nothing on the variation in the anticipation time.
- Watson never anticipated the notification signal.
Same problem, no source. There are also presentation problems with weight and POV, but I won't raise them until sourcing issue is resolved. Jim101 (talk) 03:53, 3 March 2011 (UTC)
- Jim, three of us (see above) spent considerable time trying to solve the "problem", which I believe is properly citing the statement, not its veracity. In response to your criticism, I have reworded it slightly and added some references. Regarding weight and POV, I believe the underlying issue is very important to the article and needs to be presented; otherwise the credibility of the article will be threatened. Roesser (talk) 01:10, 4 March 2011 (UTC)
- NPOV is purely dependent on sourcing and contexts, not what we think is important or not. If the sourcing issue is resolved, then the content will become automatically neutral (provided you don't just provide a reworded version of the information already stated in the Operation and Preparation section). On the same note, the citation "Jeopardy! Champ Ken Jennings", The Washington Post, 2011-02-15, http://live.washingtonpost.com/jeopardy-ken-jennings.html, retrieved 2011-02-15 is a source of notable opinion, not a source for fact. Either present the source content as opinion or remove the statement it supports. Jim101 (talk) 17:33, 4 March 2011 (UTC)
- And another note, in the Operation section there is already a statement "Part of the system used to win the Jeopardy! contest was the electronic circuitry that received the "ready" signal and then examined whether Watson's confidence level was great enough to activate the buzzer. Given the speed of this circuitry compared to the speed of human reaction times, Watson's reaction time was faster than the human contestants except when the human anticipated (instead of reacted to) the ready signal." Is there a reason the putting two different versions of the same statement in a single section? Jim101 (talk) 17:43, 4 March 2011 (UTC)
- You and I have consensus on everything up to this "another note". The two "versions" are not of the same statement. The electronic circuitry of the first refers to the logical process of examining Watson's confidence level when triggered by the "ready" signal. It does not imply that the ready signal is generated by two different mechanisms and that the mechanism for the humans involves perception time but not for Watson. Roesser (talk) 18:16, 4 March 2011 (UTC)
- First of all, no RS compared perception time (i.e., signal-to-brain time) directly by measuring seconds; they measure reaction time (i.e., signal-to-buzzer time), so going into more detail than the basic fact "Watson perceives faster than humans" is OR. Second, the backstage switch that tripped the ready signal is part of the logic circuit, and the logic circuit also contributed to the "activate the buzzer within about eight milliseconds" comparison. There is a significant overlap in the information provided by those two statements, which raises WP:UNDUE concerns, and that overlap must be eliminated. I would suggest a merge between those two statements. Jim101 (talk) 23:17, 4 March 2011 (UTC)
- The back-stage switch cannot be considered part of the circuitry discussed as part of Watson, since it was implemented by Jeopardy! (perhaps you are synthesizing this?) All we know is that it was activated simultaneously with the light seen by the humans. Watson does not perceive faster than humans, because Watson does not perceive at all. Perception is a rather involved process of detecting the illumination of the lamp performed only by the humans. Watson was directly provided with the output of perception and therefore was spared any time delay in performing that task. This is clear from the many references given. Perception is fundamental to the difference in the way Watson and the humans were notified when to buzz. Any statement addressing this issue, therefore, cannot avoid perception nor its effect on time delay. The fact that Watson was able to activate the buzzer within eight milliseconds is not attributable to any superior ability of Watson to react faster, but rather to the arbitrary advantage that it was given. You suggest a merge between the two statements, but I think they are addressing two different issues. The prior statement, I believe, was addressing delay arising from Watson reconciling its state of confidence in response to the notification signal, rather than any detection time of the signal. This prior statement should perhaps be clarified, but I don't think they can be merged. Roesser (talk) 00:53, 5 March 2011 (UTC)
- This is the closest source that tried to describe what you are talking about, but still not a peep about "the concept of perception in AI" as you are trying to define here by declaring "the machine cannot perceive, thus the advantage". The point I don't understand is: if a circuit is used by Watson to detect an electronic ready signal arriving from a terminal, how can it be true that Watson was directly provided with the output of a perception? Using eyeballs to detect light in order to make a decision counts as perception, but using a circuit to detect an electric impulse in order to make a decision does not? Who came up with this definition in the first place? And why do we even need to arbitrarily define the concept of perception in order to do a comparison in the first place?
- As for the circuit, the source also stated that the backstage ready-signal switch is directly hooked to Watson's circuit, and I believe it implied that Watson will not function unless the switch is attached, so IMO technically it is part of Watson's construction. Anyway, the point is that although sources did state that perception differences (not lack of perception) played a role, it is the entire period from flipping the signal switch to decision making to pressing the buzzer that actually got measured and compared. By trying to define the philosophical concept of perception, you are opening another can of worms. Jim101 (talk) 09:04, 5 March 2011 (UTC)
- That Huffington Post article at first glance does seem to describe the issue that I'm talking about, but it's a red herring. The author is really trying to make a considerably different and philosophical point: that too much information can have a negative effect on decision or reaction time. IMHO his reasoning is fallacious and his conclusion is wrong. But that's beside the point. Human perception is not just philosophical; it's real. I didn't make it up; it's well-established theory in physiology with much documentation. I cited one study from a RS, but there are plenty more. It's important to bring up human perception because it's at the crux of the matter. Without considering perception, the arbitrary advantage given to Watson could not be expressed. So if a RS can be referenced for the effect of human perception, then there is no reason not to bring it up in the discussion. On the other hand, Watson reacting to the electronic signal cannot be considered perception as physiologically defined. Watson's reaction to the signal is a very simple process that occurs at a much smaller time scale than that of perception. BTW, when I referred to the "output of perception" I meant information similar to that produced by human perception.
- I believe our whole discussion is based upon which types of RS are admissible. You seem to refer to RS that directly relate to Watson and its performance on Jeopardy! However, I think RS of all types that either directly or indirectly support article statements are admissible. Is it OR to break candidate statements into parts that are defensible by RS and then to use deduction to combine the parts to form the statement? In simpler terms, is deduction OR? I don't think deduction is research at all. Roesser (talk) 18:28, 5 March 2011 (UTC)
Watson's avatar
Watson's avatar graphic was changed from jpg to svg, but then temporarily reverted because the svg version had minor flaws. IMHO, the reversion should be made permanent. The avatar itself is notable because it appeared on television during the contest. Its representation in this article should be as faithful as possible. However, the conversion of a raster image to vector form will almost always incur discrepancies as in this case. Furthermore, the file size is bigger for the svg version than the jpg version. (22 kb vs 15 kb). BTW, Nicky Nouse, what happened to the Island of Cuba? Roesser (talk) 15:14, 7 March 2011 (UTC)
- The SVG version looks like it was a fun art project for someone, but should not be kept. It does not represent the raster version, which it differs from in many ways. Robert K S (talk) 22:10, 7 March 2011 (UTC)
Toronto a Canadian city?
From the text: "It is also of note that in the Final Jeopardy category U.S. Cities it answered Toronto a Canadian City."
Yeah, it IS also a Canadian city, but in the USA there is also more than one city named Toronto. Use Wikipedia! It is so bad to see such a mistake. Robert Gerbicz (talk) 08:01, 16 February 2011 (UTC)
- This has been added and reverted a few times. As you say, there are several U. S. cities named Toronto, and the fact is that we don't know what sort of associations made Watson choose the answer it did. I've reverted this again, and I'll add a hidden comment to hopefully discourage readding the claim that Watson must have bungled its geography. — Gavia immer (talk) 08:10, 16 February 2011 (UTC)
- The largest US Toronto seems to be a suburb with less than 6,000 people. I guess it's possible it meant that town, or one of the smaller US Torontos, but I'm highly skeptical. At the very least most viewers assumed it meant Canada's Toronto, as did, pretty clearly, Trebek himself.--T. Anthony (talk) 22:05, 16 February 2011 (UTC)
- They might have assumed that, but there is no way for anyone to have evidence about it. Also, given the subject matter, it's important to avoid language that makes conclusory assumptions, such as that a computer program "meant" or "believed" things. What we know is that the program built a string based on a search of raw data. — Gavia immer (talk) 22:18, 16 February 2011 (UTC)
I'm not going to change anything regarding this subject due to the nature. However, I thought it relevant to the discussion to state that while there are Torontos in the US, none of them have one airport, let alone two. With respect to the question asked, the only logical answer, to man or machine, is Toronto, Ontario. See http://ca.news.yahoo.com/blogs/dailybrew/supercomputer-watson-doesn-t-know-toronto-isn-t-20110216-084705-695.html. The other Torontos likely muddied the water, along with the Toronto Blue Jays being in the "American" League. Even IBM employees accept he was referring to Toronto, Ontario. See http://asmarterplanet.com/blog/2011/02/watson-on-jeopardy-day-two-the-confusion-over-an-airport-clue.html. Maybe information in the referenced articles could be useful. 99.245.165.37 (talk) 02:19, 17 February 2011 (UTC)
- Trebek stated in the introduction of the 16 February airing that he was surprised to hear that "Toronto is now a US city". Mindmatrix 13:33, 17 February 2011 (UTC)
- I agree with the IBM people saying that they accept that it was referring to Toronto, Ontario. However, I think at some point, with maybe either enough research or higher-level investigation, someone could come up with proof of that, I believe. That is, it seemed to me that Watson didn't utilize the category title when coming up with its answers. Thoughts? --luckymustard (talk) 22:41, 17 February 2011 (UTC)
- It is likely true. But it would need to be cited to be included in the article per WP:NOR. meshach (talk) 00:15, 18 February 2011 (UTC)
- Yes, that is my largest concern - that editors are adding unsourced assumptions about the Final Jeopardy answer. I have no problem with adding sourced content, such as Trebek's comment in the second game. — Gavia immer (talk) 00:34, 18 February 2011 (UTC)
- I just found this - http://asmarterplanet.com/blog/2011/02/watson-on-jeopardy-day-two-the-confusion-over-an-airport-clue.html - where a seeming paraphrase of David Ferrucci, of IBM, says "How could the machine have been so wrong? David Ferrucci, the manager of the Watson project at IBM Research, explained during a viewing of the show on Monday morning that several things probably confused Watson. First, the category names on Jeopardy! are tricky. The answers often do not exactly fit the category. Watson, in his training phase, learned that categories only weakly suggest the kind of answer that is expected, and, therefore, the machine downgrades their significance. The way the language was parsed provided an advantage for the humans and a disadvantage for Watson, as well. “What US city” wasn’t in the question. If it had been, Watson would have given US cities much more weight as it searched for the answer. Adding to the confusion for Watson, there are cities named Toronto in the United States and the Toronto in Canada has an American League baseball team. It probably picked up those facts from the written material it has digested.". I'd be glad to give it a go at editing the article to include this source and the important part of it that could be in this Watson article. Let me know your further thoughts. Thanks! --luckymustard (talk) 03:07, 18 February 2011 (UTC)
Toronto, Ontario does not satisfy the clue, let alone the category. I'm curious why Ferrucci and other experts are trying to excuse Watson's behavior based just on the category. Can anyone explain? This should be brought out in the article if a reliable source can be found. Roesser (talk) 17:40, 18 March 2011 (UTC)
- Watson did not specify Toronto, Ontario. Everyone assumes the Canadian city is what it meant because that's the best known Toronto. It simply said "Toronto" and both the string of question marks following and its low bid indicated it "knew" that probably wasn't correct. We can safely assume that if that were a regular question Watson would not have buzzed in and it only provided an answer because it was forced to by the game rules. Darker Dreams (talk) 06:00, 19 March 2011 (UTC)
Requested move
- The following discussion is an archived discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.
The result of the move request was: move to another name Watson (computer). Graeme Bartlett (talk) 04:15, 17 April 2011 (UTC)
Watson (artificial intelligence software) → Watson (artificial intelligence system) Watson (artificial intelligence) — Relisted. Vegaswikian (talk) 20:53, 3 March 2011 (UTC) Rerelisted to allow focus on alternatives. Andrewa (talk) 21:26, 11 March 2011 (UTC) Based on the article's own description, Watson seems to include the custom hardware setup involved. Thus, calling Watson just "software" is inaccurate. I suggest "system" as an alternate, more accurate description. --Cybercobra (talk) 20:20, 24 February 2011 (UTC)
- While I don't necessarily support or oppose this move, is there a reason why it couldn't just be titled "Watson (Artificial Intelligence)" to bypass the issue? A fluffernutter is a sandwich! (talk) 20:49, 23 February 2011 (UTC)
- Even better! Request now amended. --Cybercobra (talk) 20:20, 24 February 2011 (UTC)
- rename to Watson (artificial intelligence), shorter, and still appropriate. 65.93.15.125 (talk) 09:23, 24 February 2011 (UTC)
- Rename to Watson (computer system) or Watson (computer). There have been questions as to whether Watson can really be described as "artificial intelligence". As noted, Watson wasn't designed to pass the Turing test, and certainly does not represent anything like "AI" in its science-fiction sense, or even a rudimentary form of it. It is simply a hypothesis-generation/evidence-gathering/confidence-assigning system designed to answer one form of open-ended question. However, I agree that "Watson" as generally described in sources is more than "software" and involves specialized hardware. In light of the above, "computer system" seems the most apt descriptor to use for disambiguation. Robert K S (talk) 21:57, 24 February 2011 (UTC)
- Comment: almost all AIs are not designed to pass the Turing test. Very few AIs are designed to meet the Turing test. 65.93.15.125 (talk) 23:27, 24 February 2011 (UTC)
- Rename to Watson (question answering system). That is an accurate description of what it is (see question answering system); plus calling it an AI is controversial. pgr94 (talk) 09:55, 25 February 2011 (UTC)
- Comment where is it controversial? It uses AI rules and machine learning rules to formulate responses to interrogations, just like many commercial AI systems available. 65.95.15.144 (talk) 05:35, 4 March 2011 (UTC)
- We'd probably all agree that Watson uses techniques from the field of artificial intelligence, but to call Watson an AI is opening a can of worms. Watson is unlikely to pass the Turing test and there is no other widely accepted test for artificial intelligence. Watson can be described as doing pretty well in a subject matter expert Turing test, but that's still not strong AI. See Philosophy of AI for more. pgr94 (talk) 09:30, 4 March 2011 (UTC)
- Passing the Turing Test is something that no AI does, virtually no AIs are designed to even function in a way as useless as engaging in chitchat and smalltalk. AIs are around in the world today, all over the place, even in software for your X-Box, these real world AIs are not designed to do anything like Turing Test tasks, which are costly ivory-tower research projects that don't make money. Hell, if you designed an AI to act exactly like a dog, it will never pass the Turing Test either, since it isn't designed to be human. 65.93.12.101 (talk) 09:37, 24 March 2011 (UTC)
- Calling Watson an AI is WP:OR. Apart from the Turing test, there is no widely accepted test for artificial intelligence. The Turing test is not a necessary test but a sufficient test; the goal is not to chat, but to test for the presence of intelligence. pgr94 (talk) 10:44, 24 March 2011 (UTC)
- The Turing Test is not a test to determine if a system is an AI or not, it is a test to determine if an AI is human-like or not, if it is sufficiently advanced to pass for human. It has nothing to do with whether a system is an AI or not. It is not an objective test either. An AI is not a level of intelligence, it is a type of system. The Turing Test tests the level of intelligence, not whether a system is an AI or not. 65.93.12.101 (talk) 11:04, 24 March 2011 (UTC)
- Please see the Turing test article: "The Turing test is a test of a machine's ability to demonstrate intelligence." Anyway, this talk page is about Watson. If you have a reliable source that states that Watson is an artificial intelligence, please show us. Of course Watson uses AI techniques, but I'd be interested and surprised to see if any other experts call it an AI. That's why I think this is a bad choice of name. A question-answering system is the most accurate. pgr94 (talk) 11:34, 24 March 2011 (UTC)
- Support moving to Watson (computer). There's no need for a title like the present one that presumes things about artificial intelligence, and there's no need for a more specific disambiguation than "computer". — Gavia immer (talk) 10:07, 25 February 2011 (UTC)
- Right--a disambiguation parenthetical should be the shortest one possible necessary to provide full disambiguation. Here, there is no other Watson (computer), and "computer" is the shortest designation that fully gets the idea across that we're not talking about a person or a research center. Robert K S (talk) 12:39, 25 February 2011 (UTC)
- I change to Support Watson (computer) per above. --Cybercobra (talk) 18:18, 25 February 2011 (UTC)
- Comment: I think this move request cannot be reasonably decided until questions raised in #Does "Watson" refer to the software, or the software as running on specific hardware?,#Separate article and #Hardware are resolved, i.e. we need to be clear what we are actually referring to when we say "Watson". --Bill C (talk) 15:05, 27 February 2011 (UTC)
- IMO, seems pretty clear from IBM's description in the Overview section that the hardware is included. --Cybercobra (talk) 02:14, 1 March 2011 (UTC)
- Rename to Watson (question answering system) which is a slight variation to what is actually used in the article. Another option would be Watson (DeepQA) which is the project behind the system. Both of these avoid the issue of AI. Vegaswikian (talk) 23:39, 3 March 2011 (UTC)
- Isn't that unnecessarily specific/detailed though? --Cybercobra (talk) 01:00, 4 March 2011 (UTC)
- No. Either choice is really an accurate description. Short and inaccurate is not better than long and accurate. Vegaswikian (talk) 01:31, 4 March 2011 (UTC)
- How would Watson (computer) be inaccurate? --Cybercobra (talk) 09:58, 4 March 2011 (UTC)
- Suppose you took this Watson (computer) and reprogrammed it to say do your taxes, would you still call it Watson? Watson is more than a computer - it is a programmed computer that has a notable history. Roesser (talk) 15:23, 4 March 2011 (UTC)
- We don't name articles based on speculative hypotheticals. There is no justification for not renaming this article to Watson (computer), which is the simplest accurate unambiguous description of the article subject, as called for by article title conventions. Robert K S (talk) 01:47, 11 March 2011 (UTC)
- Rename to Watson (DeepQA), Watson (IBM computer), or even Watson (Jeopardy playing computer). Some sort of specificity is warranted by an iconic name like Watson. I'd certainly hesitate to use something as base-level and generic as Watson (computer) for the simple fact that there are other possible name collisions. A minute on Google popped up that there already exists a Watson semantic search and a Watson remote phone activating system, both computing systems that people could reasonably be looking for. Darker Dreams (talk) 09:47, 14 March 2011 (UTC)
- Watson is not a question answering system - it is an answer questioning system (as on Jeopardy).Roesser (talk) 15:23, 4 March 2011 (UTC)
- No, it is a question answering system that also has the ability to phrase the answer as a question. Other proposed applications like in the medical field will apparently not use the ability to phrase the answer as a question. Vegaswikian (talk) 19:19, 10 March 2011 (UTC)
- I vote for Watson (Artificial Intelligence) The phrase (Artificial Intelligence) is an appropriatly general term that effectively serves to disambiguate Watson. Roesser (talk) 15:23, 4 March 2011 (UTC)
- Mild support: I actually prefer Watson (artificial intelligence system) so the disambiguator directly describes the subject, but think Watson (artificial intelligence) is an improvement. I think question answering system (should be question-answering system actually) is unnecessarily specific. –CWenger (talk) 16:04, 4 March 2011 (UTC)
- Move to Watson (artificial intelligence) or even Watson (computing). The current disambiguator is wrong, but we should prefer a simpler and shorter one where it is available. Andrewa (talk) 13:57, 11 March 2011 (UTC)
- The following Alternatives section is unwarranted IMHO since many people have already spoken and may not return in time to see this. Any consensus decision should be based on all comments, including those already entered. Roesser (talk) 01:52, 12 March 2011 (UTC)
- Agree that the closing admin should take all comments into account, and I'm sure they will. I was just trying to make it a bit easier for them. If you don't want to participate, that's up to you. Please note that the relisting allows time for them to return, and may attract other editors as well, which I hope may clarify things even more. Andrewa (talk) 04:40, 12 March 2011 (UTC)
- There is or was another Watson in PC computers: the old Windows had an icon labelled "Dr.Watson", which was a side view of a man's head smoking a Sherlock Holmes pipe; if clicked, it showed a list of system errors that had happened recently. Anthony Appleyard (talk) 08:33, 12 April 2011 (UTC)
- Hence, we should avoid Watson (computing). --Cybercobra (talk) 22:50, 12 April 2011 (UTC)
Alternatives
I think we have rough consensus that the article should move, but not what to. Suggest that people sign up to their preference below. By all means indicate a second or third choice if you wish, but it's the primary that's most important. Feel free to add or restate reasons. Andrewa (talk) 21:26, 11 March 2011 (UTC)
- If your intent here is to set up a poll, aren't you thwarting that purpose by adding your signed name under numerous options? Robert K S (talk) 09:16, 12 March 2011 (UTC)
- No. I'm not acting as an uninvolved admin, and can't now, I'm involved! But the nominator of any RM is counted as supporting the RM (unless they specifically say otherwise, which does happen), and that doesn't count as thwarting that purpose. All I seek is to make the process go smoothly to a good conclusion, and I'm quite happy to be overruled! Andrewa (talk) 09:55, 12 March 2011 (UTC)
- ??? I'm suggesting that any poll would be much more useful if we limit to one signature per option. Otherwise, what we have is no different from a discussion, except that because there is no disincentive to comment on every choice, it becomes bloated. Robert K S (talk) 10:59, 12 March 2011 (UTC)
- What we are seeking is a consensus on a new name, set out so that it's clearly visible to the closing admin. You may be right; comments on second and later choices are only of any significance if there's no clear first past the post winner, and so far there does seem to be a clear favourite. But even in that case they do no harm, and in other scenarios they might be very helpful. Andrewa (talk) 11:46, 12 March 2011 (UTC)
Watson (computer system)
- Acceptable if we get a consensus on it. Andrewa (talk) 21:31, 11 March 2011 (UTC)
- I guess this would also be acceptable. --Cybercobra (talk) 10:50, 12 March 2011 (UTC)
Watson (computer)
- 1st choice - simplest, informative and accurate. Andrewa (talk) 21:31, 11 March 2011 (UTC)
- 1st choice --Cybercobra (talk) 10:50, 12 March 2011 (UTC)
- The only choice that follows the guidelines and describes Watson as it is described in virtually every source. Robert K S (talk) 15:27, 12 March 2011 (UTC)
- against; this seems too similar to Watson#Computing in terms of possible ambiguity. Other equally simple and accurate distinguishers are available. Darker Dreams (talk) 14:46, 24 March 2011 (UTC)
Watson (question answering system)
- 1st choice: most accurate. pgr94 (talk) 12:21, 24 March 2011 (UTC)
- 1st choice: most accurate. Darker Dreams (talk) 14:37, 24 March 2011 (UTC)
- Seems unnecessarily specific. --Cybercobra (talk) 16:14, 24 March 2011 (UTC)
- Embarrassingly contradictory to its role on Jeopardy!, which was an answer questioning system. It’s primarily this context that defines Watson. Roesser (talk) 17:55, 24 March 2011 (UTC)
- Have you read the technical article? "Our results strongly suggest that DeepQA is an effective and extensible architecture [..] to rapidly advance the field of question answering (QA)." technical article.
- On your suggestion, I have read the technical article and appreciate that DeepQA architecture is what you say. However, this Wikipedia article is about Watson, which according to the technical article is motivated by Jeopardy! and was designed and tested for that express purpose. Therefore Watson was designed to question answers and would have been penalized during Jeopardy! if it gave answers. Anyways, a typical reader of Wikipedia knows Watson based on its participation on Jeopardy! and therefore expects the context to be a system that questions answers. Roesser (talk) 23:21, 24 March 2011 (UTC)
Watson (information technology)
- Acceptable if we get a consensus on it. Andrewa (talk) 21:31, 11 March 2011 (UTC)
- Disfavor as inaccurate. --Cybercobra (talk) 10:50, 12 March 2011 (UTC)
- Highly ambiguous, there's the Dr. Watson program by Microsoft, for instance. 65.93.12.101 (talk) 09:39, 24 March 2011 (UTC)
Watson (artificial intelligence)
- Acceptable if we get a consensus on it. Andrewa (talk) 21:31, 11 March 2011 (UTC)
- Quite acceptable --Cybercobra (talk) 10:50, 12 March 2011 (UTC)
- Support: accurate, and more concise. 65.93.12.101 (talk) 09:38, 24 March 2011 (UTC)
- strongly against: This is WP:OR. Arguments already given above, no point repeating. pgr94 (talk) 12:18, 24 March 2011 (UTC)
- AI ≠ strong AI, and it's plainly within the field of AI. --Cybercobra (talk) 16:12, 24 March 2011 (UTC)
Watson (computing)
- 2nd choice - conventional. Andrewa (talk) 21:31, 11 March 2011 (UTC)
- Disfavor due to possible ambiguity: Watson#Computing --Cybercobra (talk) 10:50, 12 March 2011 (UTC)
- Highly ambiguous. There's the Dr. Watson program by Microsoft, for instance. 65.93.12.101 (talk) 09:38, 24 March 2011 (UTC)
Others - please specify
Keep the current title
- The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
Does "Watson" refer to the software, or the software as running on specific hardware?
Sometimes the article speaks of Watson as a piece of software and sometimes it speaks of Watson as a package of software and hardware (as used on Jeopardy). Which is it? -- Dan Griscom (talk) 14:49, 19 February 2011 (UTC)
- I have wondered about this as well. If it's the latter, the article title probably needs to be changed. –CWenger (talk) 15:26, 19 February 2011 (UTC)
My understanding is that the term "Watson" primarily refers to the unique high-performance hardware. Of course, general-purpose computers are useless without software, so the DeepQA software, the Linux operating system, the natural language analysis programming, and much other included software is of almost equal value. The article as it exists now explains the hardware and software structure in a way that seems reasonable to me, but I admit to a background in both academic computer science and practical software engineering. David Spector (talk) 17:19, 4 November 2011 (UTC)
section on Implications on artificial intelligence
The material in this section is based on philosophical considerations that can not be established in fact. Whether Watson or other automata think really can not be ascertained by humans, because to do so involves thinking on our part, which is circular reasoning. The section is therefore not appropriate in this article and should IMHO be deleted. Roesser (talk) 00:21, 23 November 2011 (UTC)
- You seem to misunderstand what circular reasoning is. --Cybercobra (talk) 09:54, 23 November 2011 (UTC)
- I agree, "circular reasoning" is not the proper term. What I'm trying to say is that humans can not comprehend what it is that they are doing when they think, because such comprehension involves the process (thinking) that is to be comprehended. Only an outside agent could comprehend what humans do when they think. Perhaps there is a philosophical term that refers to this - can you think of one? Roesser (talk) 16:25, 23 November 2011 (UTC)
Please add basic physical information on Hardware and power requirements
How big a room does it occupy? How many kW does it consume? How many kW are consumed by externalities such as cooling? IMHO such basic physical parameters belong in section 1.1, even before its more general info on TB, number of processors and RAM. Pawprintoz (talk) 07:05, 17 January 2012 (UTC)
- It's very well possible that such specifics aren't publicly available. --Cybercobra (talk) 08:18, 17 January 2012 (UTC)
At least some information must be available. For example the floorspace and power requirements of a single IBM Power 750 server must be widely known, as well as its nominal RAM. From this the minimum power consumption and floorspace could be calculated. I can't see why IBM would be reticent in answering the questions either, but it would probably take more than an email from a stranger to elicit this information. I feel this sort of information is vital, although not immediately so. The entry on ENIAC says that it took up 167 m2 and consumed 150 kW. Surely Watson will be seen historically as an equally important machine. Pawprintoz (talk) 01:58, 18 January 2012 (UTC)
File:Watson Jeopardy.jpg Nominated for speedy Deletion
An image used in this article, File:Watson Jeopardy.jpg, has been nominated for speedy deletion for the following reason: Wikipedia files with no non-free use rationale as of 3 March 2012
Don't panic; you should have time to contest the deletion (although please review deletion guidelines before doing so). The best way to contest this form of deletion is by posting on the image talk page.
To take part in any discussion, or to review a more detailed deletion rationale please visit the relevant image page (File:Watson Jeopardy.jpg) This is Bot placed notification, another user has nominated/tagged the image --CommonsNotificationBot (talk) 20:35, 3 March 2012 (UTC)
Swearing
I would like to have the part about Watson's swearing added, but I can't find where to place it.
Possible phrasing: Watson's knowledge briefly included the contents of Urban Dictionary, but was purged when it began to answer researchers with the word "bullshit". A swearing filter has since been added.
--Auric talk 13:31, 12 January 2013 (UTC)
Beyond Jeopardy!
The actual system used for the Jeopardy! competition is well-defined, but this story is moving beyond that. The Jeopardy! part of the story is over. In case you are wondering: IBM's style was set with Deep Blue: once their system beats a world champion (Kasparov), the actual system is mothballed to a museum and IBM is smart enough to never be involved in any sort of re-match ever. IBM did the thing as a marketing ploy. So the idea of any IBM Inc. computer ever playing another game of Jeopardy! against world champions is not going to happen. IBM just will not allow it. What I mean by this is that we have "Watson (computer)" (the actual machine at that time) and "Watson (computer system)" (the software system that is undergoing further software development and moving on to medicine and other fields). Should we have two articles? If so, what article names would best convey the distinction between the two subjects? Keep in mind that IBM will continue to call it "IBM Watson" for marketing purposes.--76.220.18.223 (talk) 14:28, 9 February 2013 (UTC)
GA Review
- This review is transcluded from Talk:Watson (computer)/GA1. The edit link for this section can be used to add comments to the review.
Reviewer: SpinningSpark 20:03, 10 November 2013 (UTC)
If there are no objections, I'll take this review. I'll note at the outset I've had no role in editing or creating this article. I welcome other editors at any stage to contribute to this review. I will spend a day familiarising myself with the article and then provide an assessment. Kind regards, LT910001 (talk) 01:12, 19 October 2013 (UTC)
Thanks for waiting. In conducting this review, I will:
- Provide an assessment using WP:GARC
- If this article does not meet the criteria, explain what areas need improvement.
- Provide possible solutions that may (or may not) be used to fix these.
Assessment
Rate | Attribute | Review Comment |
---|---|---|
1. Well-written: | ||
1a. the prose is clear, concise, and understandable to an appropriately broad audience; spelling and grammar are correct. | Readability is hampered by structure. Several other minor concerns. | |
1b. it complies with the Manual of Style guidelines for lead sections, layout, words to watch, fiction, and list incorporation. | ||
2. Verifiable with no original research: | ||
2a. it contains a list of all references (sources of information), presented in accordance with the layout style guideline. | Some sources do not have access dates; very occasional section does not have a source. | |
2b. reliable sources are cited inline. All content that could reasonably be challenged, except for plot summaries and that which summarizes cited content elsewhere in the article, must be cited no later than the end of the paragraph (or line if the content is not in prose). | ||
2c. it contains no original research. | ||
3. Broad in its coverage: | ||
3a. it addresses the main aspects of the topic. | Will evaluate after refactoring | |
3b. it stays focused on the topic without going into unnecessary detail (see summary style). | ||
4. Neutral: it represents viewpoints fairly and without editorial bias, giving due weight to each. | I recommend no pass on GAC 4 until issues concerning involvement with medical diagnosis are addressed as per my Commentary below. However, I think it should be easy for the editors to correct this in a week or two at most and therefore do not recommend failing the GA nomination if it would otherwise pass, instead of placing it on hold. EllenCT (talk) 08:16, 11 November 2013 (UTC) | |
5. Stable: it does not change significantly from day to day because of an ongoing edit war or content dispute. | ||
6. Illustrated, if possible, by media such as images, video, or audio: | ||
6a. media are tagged with their copyright statuses, and valid non-free use rationales are provided for non-free content. | ||
6b. media are relevant to the topic, and have suitable captions. | ||
7. Overall assessment. |
Commentary
At first blush, this article can definitely be improved to GA standard within a reasonable time span; however, there are some outstanding issues:
- Several sources do not have access dates
- Readability is impacted by structure. Suggest you use the structure (or something similar) "Description: Hardware, software, processing" and "History: Development, Jeopardy" and "Future applications".
- There is a very large quote in the 'hardware' section, which could be to some extent integrated into text. This might be done by explaining the nature of components or using a secondary source to provide commentary on the hardware components' utility.
Kind regards, LT910001 (talk) 01:51, 20 October 2013 (UTC)
- No action in a week, am failing this review based on the reasons above. Would encourage renomination when this article's issues relating to readability have been addressed; this may include restructuring the article and addressing the comments above. Kind regards, LT910001 (talk) 04:48, 30 October 2013 (UTC)
- Ack! Oh, the endless embarrassment! This article has indeed been changed in response to my earlier comments. I've reversed the failure and put the review on hold. I'm going to go on a wikibreak and if it's not much trouble would ask you to find another reviewer to complete the review. Thanks for responding to the changes (I shouldn't have acted so rashly!) and I wish you all the best. LT910001 (talk) 04:56, 30 October 2013 (UTC)
Since the original reviewer seems to have gone away, I'll take a look at this one. The immediate problems that show up are,
- The disambiguation tool is showing two problems that need fixing
- The external links tool is showing two deadlinks. (added 10/11) It is not a requirement for GA, but I would strongly recommend that you archive your online sources at WebCite to protect them from linkrot and add a link to WebCite in the cite so that future editors can find them easily.
- A lot of the sources are blogs. Some might be ok on the "recognised expert" rule, but I need to take a closer look. (edit: detailed comments below)
SpinningSpark 19:33, 8 November 2013 (UTC)
- Thanks for taking over, Spinningspark. I'm present on Wikipedia at the moment, but I'm not able to reliably devote enough time for a proper GA review, and I'm not able to guarantee timeliness when responding. Thanks for taking over, LT910001 (talk) 00:49, 9 November 2013 (UTC)
Further comments:
- File:Watson Answering.jpg requires a fair use rationale for this article.
- File:DeepQA.svg. The link in the "other versions" field is not to another version on Wikimedia, but to the source diagram. This should rather be presented as a reference or else added to the source field e.g "Own work based on [...]". I also note that the diagram is very close to the source and the annotation is identical. This is uncomfortably close to be being a copyvio. I don't think I am inclined to fail it for GA because of this but it is open to challenge by others in the future.
- fn#5 gives a quote and links to the source but does not name the source.
- fn#7 does not link to a relevant article. Probably the page has been changed or else has gone dead. Does not seem to be necessary anyway as fn#3 is sufficient verification.
- fn#6 fails verification. It is supposed to be verifying the choice of human contestants but is written before the selection was made and only mentions one of them. Does not seem to be necessary anyway as fn#3 is sufficient verification.
- fn#22 needs a page number.
- The passage beginning "To provide a physical presence..." is cited to an article by one of the contestants (fn#25). This does not strike me as a reliable source for IBM's motivation in design choices, in particular the 42 threads claim.
- fn#41, the source is not named. It is also timing out for me and may possibly have gone dead.
- fn#62. Why is this source considered reliable?
- fn#69 fails verification. Probably, the page has been changed
- fn#73 is a bare url and requires a page number for verification
- fn#76 is essentially a marketing ad. I am not seeing what this is supposed to be verifying or what it is adding to the article.
- fn#77 provides a link but does not name the source
- fn#83 provides a link but does not name the source
- fn#84 provides a link but does not name the source
SpinningSpark 21:37, 10 November 2013 (UTC) to 10:49, 11 November 2013 (UTC)
- I have resolved most of the image use and verifiability issues that you addressed in this list. —Seth Allen (discussion/contributions), Monday, November 11, 2013, 17:15 U.T.C.
- fn#7 (new numbering) is dead
- You have not responded concerning fn#24 and the number 42. A cite from IBM on why IBM have done something would be better. Or else attribute the claim to Jennings in-article.
- fn#40 is still dead. This might be ok if the document exists other than online but there is not enough citation information given to be able to find it. Alternatively, if fn#14 has all the necessary information please provide page number.
- I need to see a response from you to the issue raised by EllenCT before I can pass this. SpinningSpark 18:57, 11 November 2013 (UTC)
- I have finished my work with this article. Now here is an outline of what I did: I provided access dates for those sources that lacked them, gave a fair use rationale to the non-free image that lacked it, and as to the "DeepQA" diagram, I discarded the link in the "Other versions" field and moved it to the source field ("Own work based on diagram found at http://www.aaai.org/Magazine/Watson/watson.php). For the links that did not name their sources, I converted them to the standard citation format (title, URL, author, publisher, publication date, and access date). I also removed those footnotes that failed verification (including the blog that you said should not be considered reliable, as well as a citation to copyvio content on YouTube), provided page numbers for footnotes 22, 73, and 40, and provided archived copies for the two dead links. For the claim to Jennings, I removed it from the citation, and now the article attributes the claim to him in the prose, in the second paragraph of the "Jeopardy preparation" section. And for the "Future applications" section, I gave that a significant, if not complete, overhaul to ensure neutrality; mention of Watson's involvement in medical diagnosis was removed from pre-existing statements and now there is a notice in the first paragraph of the "Healthcare" section that simply states: "Despite being developed and marketed as a 'diagnosis and treatment advisor,' Watson has never been actually involved in the medical diagnosis process, only in assisting with identifying treatment options for patients who have already been diagnosed." So, I guess I have covered nearly all the concerns that have been raised in this nomination. —Seth Allen (discussion/contributions), Tuesday, November 12, 2013, 21:41 U.T.C.
I'm concerned that the article repeats several IBM press releases which state that they were developing Watson to be involved in the medical diagnosis process, and in a few instances implies that it actually is so involved. However, IBM's detailed documentation makes it clear that the only implementations involved with healthcare are in "utilization management" (cost-benefit analysis concerning treatment for patients already diagnosed by an M.D.) and simply recommending treatment options. For example, see slide 7 of this presentation which indicates that the "Watson Diagnosis & Treatment Advisor" actually only "assists with identifying individualized treatment options for patients [already] diagnosed with cancer," which is corroborated by this case study which, though sub-headlined, "IBM Watson helps fight cancer with evidence-based diagnosis and treatment suggestions," again contains no statements that suggest Watson is actually involved in the diagnosis process and several which indicate it recommends treatment options for patients already diagnosed. I think this indicates some pretty heavy-handed attempts at manipulation on the part of IBM's marketing department, which border on outright deception, and I personally would never consider this article passing GAC 4 (neutrality) until the statements implying that Watson is involved with performing or assisting medical diagnoses are corrected to be consistent with the details of IBM's descriptive literature. If Watson were actually to assist in medical diagnosis, the potential legal liability for diagnosis errors would probably be vast, and since Watson's knowledge base that it uses to interpret natural language statements is crowdsourced, (including from Wikipedia editors!) indemnification against potential error and even natural language ambiguity is, I believe, a larger problem than what Watson has so far addressed. EllenCT (talk) 08:11, 11 November 2013 (UTC)
Passing for GA. Well done, a very interesting article. SpinningSpark 00:28, 13 November 2013 (UTC)
Watson Criticisms
If there were a section on criticism, it would help with gathering observations and perspective on the system. For example, I notice that the rules for participation by Watson were adjusted from what a human would see. There are other criticisms not voiced here also. Jbottoms76 (talk) 15:27, 14 May 2013 (UTC)
This is not at all fair
It should be mentioned in the article that even though this is impressive, it is not really a fair comparison to humans because humans, unlike Watson, also have to deal with:
- Recognizing that "ready light" in order to buzz, with complex image processing, which Watson didn't have to do
- HEAR the spoken words and convert them to "symbols", which Watson didn't have to do, either
So, while it's impressive that a huge multi-million dollar machine can now, in one single respect, with lots of advantages given, seemingly compete with humans, all of this should be mentioned in the pursuit of fairness ;-) — Preceding unsigned comment added by 31.4.245.145 (talk) 16:56, 29 January 2014 (UTC)
Watson's voice
Not a savvy Wikipedia user so forgive me if I'm doing this incorrectly, but a friend sent me a link to this article and suggested I chime in. I am Jeff Woodman, and although I signed a non-disclosure agreement with IBM, once I was "outed" by N.Y. Times readers as the voice of Watson, IBM gave me permission to confirm the fact, which I did during a radio interview with Lise Avery on her program Anything Goes With Lise Avery on WFDU FM.
I have no idea what sort of verification is required, but if the gentlemen (DS and Robert KS) who seem to be having a disagreement about whether or not to include the information in the article on Watson wish to contact me at jefflrfe@aol.com, I'll do my best to clear things up. Jefflrfe (talk) 21:59, 15 February 2014 (UTC) Jeff Woodman
COI declaration needed
I understand that an IBM employee assigned to this project has been editing this article. If so, the "connected contributor" infobox needs to be added to the top of this talk page. Ethically, the editor him/herself should be the one to do it. So, I will give them a chance to do it first. Cla68 (talk) 23:30, 10 February 2014 (UTC)
- After twice attempting to include the template, Huon twice undid my addition. Since nobody likes an edit war, perhaps Huon could come here and explain why the change is "unnecessary" (according to one of their edit summaries removing the template). 184.8.111.147 (talk) 19:43, 19 April 2014 (UTC)
- Because Fluffernutter hasn't edited this article since 2012. Why should the page be tagged for a conflict of interest someone may have who doesn't edit it? This is a solution lacking a problem. Huon (talk) 19:53, 19 April 2014 (UTC)
- I see no expiration date on the template documentation. Her last edit to this article was April 23, 2012, and she should have added the template to this talk page way back then, but did not. The template alerts Wikipedians that someone with a COI contributed here, so that they can then look for language that does not adhere to Wikipedia's neutral POV policy. 184.8.111.147 (talk) 20:25, 19 April 2014 (UTC)
- Except this article has since passed a GA review. I expect any serious problems of non-neutrality would have been found and addressed at that time. If you think some unresolved NPOV problems exist, please provide an example instead of just adding the name of someone who is irrelevant to the current shape of the article to the talk page. Huon (talk) 21:26, 19 April 2014 (UTC)
RAM
Unsure of the correct amount of RAM that Watson uses. One source in the article states 16 TB; elsewhere in the article it's quoted at 15 TB. I believe the 15 TB figure is probably erroneous: Alex Trebek mentioned on Day 1 that Watson had 15 trillion bytes of memory, which is equivalent to just under 14 TB. Do we go with him or with source [9]?
What is leg?GoPeter452 (talk) 20:48, 18 February 2011 (UTC)
The beginning of the article says hard disk storage, but everything else refers to RAM storage.
It has 15 terabytes of memory and 20 terabytes of disk, clustered. Rabimba (talk) 14:32, 26 June 2014 (UTC)
Trivial grammar quibble
Watson's differences with human players had generated conflicts...
Wouldn't that preferably be "...differences from human players..."?
☺ Dick Kimball (talk) 17:38, 4 November 2014 (UTC)
More software details
- A framework for merging and ranking of answers in DeepQA [5][6]
- Structured data and inference in DeepQA [7][8]
- Relation extraction and scoring in DeepQA [9][10]
- Fact-based question decomposition in DeepQA [11][12]
- Question analysis: How Watson reads a clue [13][14]
pgr94 (talk) 19:58, 7 November 2014 (UTC)
New sources available
- PI David Ferrucci answers viewer questions - includes some more info on the Toronto foul-up, Watson's heritage, etc. A fluffernutter is a sandwich! (talk) 20:21, 28 February 2011 (UTC)
- Game Show NewsNet interview with Todd Alan Crain - Contains interesting information, including a basic timeline, of Watson's training against Jeopardy! champions. Robert K S (talk) 13:08, 3 March 2011 (UTC)
- IBM Brings Watson to Africa - includes direct statements from IBM on how it plans to apply Watson to healthcare in Africa, with an example cervical cancer statistic and how Watson could alleviate the situation. Aquach04 (talk) 21:49, 12 February 2016 (UTC)
Critique from an ex-IBM insider
https://medcitynews.com/2017/09/former-ibm-employee-ai-truth-needs-come/ 136.148.221.147 (talk) 17:36, 29 September 2017 (UTC)
Wikipedia
Anybody know if Wikipedia is being used in Watson? Smallman12q (talk) 20:24, 17 June 2010 (UTC)
- According to the Nova episode that aired this week ("The Smartest Machine on Earth"), yes. - dcljr (talk) 23:23, 10 February 2011 (UTC)
- Yes, according to A: This Computer Could Defeat You at 'Jeopardy!' Q: What is Watson? (5'min05"sec). Google may help to find a source. 140.120.55.63 (talk) 16:02, 17 February 2011 (UTC)
Yes. It is used in the system that played Jeopardy!, which drew on both Wikipedia and DBpedia along with a number of other sources. But the system as a whole can take any data source. Rabimba (talk) 14:22, 26 June 2014 (UTC)
- In that case, why is this fact only mentioned in the lead, which should not have unique info? FunkMonk (talk) 04:11, 2 November 2017 (UTC)